The positive outcome of a trauma intervention depends on the intraoperative evaluation of inserted metallic implants. Due to metal artifacts, the quality of this evaluation heavily depends on the performance of so-called metal artifact reduction (MAR) methods. Most of these MAR methods require a prior segmentation of the inserted metal objects. Hence, typically a threshold-based segmentation method operating in the reconstructed 3D volume is applied, despite some major disadvantages. With this publication, the potential of shifting the segmentation task to a learning-based, view-consistent method operating in the 2D projections is investigated. To segment the metal, a learning-based 2D projection-wise segmentation network, trained with real data acquired during a cadaver study, is investigated. To overcome the drawbacks of 2D projection-wise segmentation, a consistency filter is proposed. The influence of the shifted segmentation domain is investigated by comparing the results of the standard fsMAR with those of a modified fsMAR version that uses the new segmentation masks. In a quantitative and qualitative evaluation on real cadaver data, the investigated method shows increased MAR performance and an insensitivity to metal artifacts. For cases with metal outside the reconstruction's field of view, or cases with vanishing metal, a significant reduction of artifacts can be shown. Accordingly, a gain of roughly 3 dB in the mean PSNR metric over all slices, and up to 9 dB for single slices, is achieved. The shown results reveal a beneficial influence of the shift to a 2D-based segmentation method on real data for downstream use with MAR methods such as fsMAR.
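For context, the PSNR gain quoted above follows the standard peak signal-to-noise ratio definition computed per reconstructed slice; the sketch below (NumPy, with hypothetical `reference` and `corrected` arrays) illustrates that metric and is not the authors' evaluation code.

```python
import numpy as np

def psnr(reference: np.ndarray, corrected: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB between a ground-truth slice
    and a MAR-corrected reconstruction (hypothetical inputs)."""
    mse = np.mean((reference - corrected) ** 2)
    if mse == 0:
        return float("inf")
    peak = reference.max()  # dynamic range taken from the reference slice
    return 10.0 * np.log10(peak ** 2 / mse)

# A reported gain of ~3 dB mean PSNR corresponds to
# mean(psnr(ref, corrected)) - mean(psnr(ref, uncorrected)) over all slices.
```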
Charisma is considered one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence's (AI) perspective to provide it with such skill. Beyond that, a plethora of use cases opens up for computational measurement of human charisma, such as tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies, or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels in these dimensions for humanoid robots or virtual agents seems accomplishable. Beyond that, automatic measurement also appears quite feasible given the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can appear charismatic, but also analyse the charisma of others. To this end, we first provide the psychological perspective, including different models of charisma and its behavioural cues. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills, before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
Telling stories is an integral part of human communication which can evoke emotions and influence the affective states of the audience. Automatically modelling emotional trajectories in stories has thus attracted considerable scholarly interest. However, as most existing works have been limited to unsupervised dictionary-based approaches, there is no labelled benchmark for this task. We address this gap by introducing continuous valence and arousal annotations for an existing dataset of children's stories annotated with discrete emotion categories. We collect additional annotations for this data and map the originally categorical labels to the valence and arousal space. Leveraging recent advances in Natural Language Processing, we propose a set of novel Transformer-based methods for predicting valence and arousal signals over the course of written stories. We explore several strategies for fine-tuning a pretrained ELECTRA model and study the benefits of considering a sentence's context when inferring its emotionality. Moreover, we experiment with additional LSTM and Transformer layers. The best configuration achieves a Concordance Correlation Coefficient (CCC) of .7338 for valence and .6302 for arousal on the test set, demonstrating the suitability of our proposed approach. Our code and additional annotations are made available at https://github.com/lc0197/emotion_modelling_stories.
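As the Concordance Correlation Coefficient is the headline metric here, a minimal NumPy implementation may help make the reported .7338/.6302 scores concrete; this follows Lin's textbook definition and is not taken from the linked repository.

```python
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Concordance Correlation Coefficient (Lin, 1989): combines Pearson
    correlation with penalties for scale and location shifts."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)
```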
Recent work has reported that AI classifiers trained on audio recordings can accurately predict severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection status. Here, we undertake a large-scale study of audio-based deep learning classifiers as part of the UK government's pandemic response. We collect and analyse a dataset of audio recordings from 67,842 individuals with linked metadata, including reverse transcription polymerase chain reaction (PCR) test outcomes, of whom 23,514 tested positive for SARS-CoV-2. Subjects were recruited via the UK government's National Health Service Test-and-Trace programme and the REal-time Assessment of Community Transmission (REACT) randomised surveillance survey. In an unadjusted analysis of our dataset, AI classifiers predict SARS-CoV-2 infection status with high accuracy (Receiver Operating Characteristic Area Under the Curve (ROC-AUC) 0.846 [0.838, 0.854]), consistent with the findings of previous studies. However, after matching on measured confounders, such as age, gender, and self-reported symptoms, our classifiers' performance is much weaker (ROC-AUC 0.619 [0.594, 0.644]). Upon quantifying the utility of audio-based classifiers in practical settings, we find them to be outperformed by simple predictive scores based on user-reported symptoms.
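The gap between the unadjusted 0.846 and the matched 0.619 ROC-AUC comes from evaluating on a confounder-matched subset. A hedged sketch of that evaluation pattern is shown below, using pandas and scikit-learn; the column names and the simple 1:1 exact-matching logic are illustrative assumptions, not the study's matching procedure.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# df: one row per participant with hypothetical columns 'score'
# (classifier output), 'covid' (PCR label), and measured confounders
# such as 'age_band', 'gender', and 'symptoms'.
def matched_auc(df: pd.DataFrame) -> float:
    """ROC-AUC after exact 1:1 matching of positives to negatives
    on measured confounders (simplified sketch)."""
    keys = ["age_band", "gender", "symptoms"]
    matched = []
    for _, group in df.groupby(keys):
        pos = group[group.covid == 1]
        neg = group[group.covid == 0]
        n = min(len(pos), len(neg))
        if n > 0:
            matched.append(pos.sample(n, random_state=0))
            matched.append(neg.sample(n, random_state=0))
    m = pd.concat(matched)
    return roc_auc_score(m.covid, m.score)
```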
Since early in the coronavirus disease 2019 (COVID-19) pandemic, there has been interest in using artificial intelligence methods to predict COVID-19 infection status based on vocal audio signals, for example cough recordings. However, existing studies have limitations in terms of data collection and the assessment of the performance of the proposed predictive models. This paper rigorously assesses state-of-the-art machine learning techniques used to predict COVID-19 infection status based on vocal audio signals, using a dataset collected by the UK Health Security Agency. This dataset includes acoustic recordings and extensive study participant metadata. We provide guidelines on testing the performance of methods that classify COVID-19 infection status based on acoustic features, and we discuss how these can be extended more generally to the development and assessment of predictive methods based on public health datasets.
The UK COVID-19 Vocal Audio Dataset is designed for the training and evaluation of machine learning models that classify SARS-CoV-2 infection status or associated respiratory symptoms using vocal audio. The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022, during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and some Omicron variant sublineages. Audio recordings of volitional coughs, exhalations, and speech were collected in the 'Speak up to help beat coronavirus' digital survey alongside demographic, self-reported symptom, and respiratory condition data, and linked to SARS-CoV-2 test results. The UK COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2 PCR-referenced audio recordings to date. PCR results were linked to 70,794 of 72,999 participants and to 24,155 of 25,776 positive cases. Respiratory symptoms were reported by 45.62% of participants. The dataset has additional potential uses for bioacoustics research, with 11.30% of participants reporting asthma and 27.20% having linked influenza PCR test results.
The existence of metallic implants in projection images for cone-beam computed tomography (CBCT) introduces undesired artifacts which degrade the quality of reconstructed images. In order to reduce metal artifacts, projection inpainting is an essential step in many metal artifact reduction algorithms. In this work, a hybrid network combining the shifted-window (Swin) vision transformer (ViT) and a convolutional neural network is proposed as a baseline network for the inpainting task. To incorporate metal information into the Swin-ViT-based encoder, metal-conscious self-embedding and neighborhood-embedding methods are investigated. Both methods improve the performance of the baseline network. Furthermore, by choosing an appropriate window size, the model with neighborhood embedding achieves the lowest mean absolute error of 0.079 in metal regions and the highest peak signal-to-noise ratio of 42.346 in CBCT projections. Finally, the efficacy of metal-conscious embedding is demonstrated on both simulated and real cadaver CBCT data, where it enhances the inpainting capability of the baseline network.
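The 0.079 mean absolute error above is measured only inside the metal trace of each projection. A minimal sketch of such a masked metric follows (hypothetical arrays; not the paper's evaluation pipeline):

```python
import numpy as np

def masked_mae(target: np.ndarray, inpainted: np.ndarray,
               metal_mask: np.ndarray) -> float:
    """Mean absolute error restricted to the metal regions of a CBCT
    projection; metal_mask is a boolean array of the same shape."""
    return float(np.abs(target - inpainted)[metal_mask].mean())
```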
Humour is an important element of human affect and cognition. Its automatic understanding can facilitate more natural human-device interaction and the humanisation of artificial intelligence. Current methods of humour detection are based solely on staged data, making them inadequate for 'real-world' applications. We address this deficiency by introducing the novel Passau Spontaneous Football Coach Humour (Passau-SFCH) dataset, comprising about 11 hours of recordings. The Passau-SFCH dataset is annotated for the presence of humour and its dimensions (sentiment and direction), as proposed in Martin's Humor Style Questionnaire. We conduct a series of experiments employing pretrained Transformers, convolutional neural networks, and expert-designed features. The performance of each modality (text, audio, video) for spontaneous humour recognition is analysed, and their complementarity is investigated. Our findings suggest that, for the automatic analysis of humour and its sentiment, facial expressions are most promising, while humour direction can be modelled via text-based features. The results reveal considerable differences between subjects, highlighting the individuality of humour usage and style. Moreover, we observe that decision-level fusion yields the best recognition results. Finally, we make our code publicly available at https://www.github.com/eihw/passau-sfch. The Passau-SFCH dataset is available upon request.
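Decision-level fusion, which performed best here, combines per-modality predictions rather than features. A minimal sketch with hypothetical per-modality humour probabilities and a plain unweighted mean (the authors' exact fusion scheme may differ):

```python
import numpy as np

def late_fusion(text_probs: np.ndarray,
                audio_probs: np.ndarray,
                video_probs: np.ndarray) -> np.ndarray:
    """Decision-level (late) fusion: average the per-modality humour
    probabilities, then threshold the fused score."""
    fused = (text_probs + audio_probs + video_probs) / 3.0
    return (fused >= 0.5).astype(int)
```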
Vocal bursts play an important role in communicating affect, making them valuable for improving speech emotion recognition. Here, we present our approach to classifying vocal bursts and predicting their emotional significance in the ACII Affective Vocal Bursts Workshop and Challenge 2022 (A-VB). We use a large self-supervised audio model as a shared feature extractor and compare multiple architectures built on classifier chains and attention networks, combined with uncertainty-based loss weighting strategies. Our approach surpasses the challenge baseline on all four tasks.
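A hedged sketch of the overall pattern, a shared self-supervised embedding feeding a chained multi-output model, is given below; the Wav2Vec2 base checkpoint and scikit-learn's RegressorChain are stand-ins for the larger audio model and the custom chain architectures described above.

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.linear_model import Ridge
from sklearn.multioutput import RegressorChain

# Shared self-supervised feature extractor (smaller stand-in checkpoint).
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

def embed(waveform: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Mean-pooled hidden states as a fixed-size utterance embedding."""
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, T, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

# Chained regressor over the emotion dimensions: each output is
# conditioned on the previous ones, mirroring a classifier-chain setup.
chain = RegressorChain(Ridge())
# chain.fit(X_embeddings, Y_emotions)  # hypothetical training data
```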
Recognising continuous emotions and action unit (AU) intensities from face videos requires a spatial and temporal understanding of expression dynamics. Existing works primarily rely on 2D face appearance to extract such dynamics. This work focuses on a promising alternative based on parametric 3D face shape models, which disentangle different factors of variation, including expression-induced shape variations. We aim to understand how well expressive 3D face shapes perform in estimating valence-arousal and AU intensities compared with state-of-the-art 2D appearance-based models. We benchmark four recent 3D face alignment models: ExpNet, 3DDFA-V2, DECA, and EMOCA. In valence-arousal estimation, the expression features of the 3D face models consistently surpassed previous works, yielding average concordance correlation coefficients of .739 and .574 on the SEWA and AVEC 2019 CES corpora, respectively. We also study how 3D face shapes perform on AU intensity estimation on the BP4D and DISFA datasets, and report that the 3D face features were on par with 2D appearance features for AUs 4, 6, 10, 12, and 25, but not for the entire set of AUs. To understand this discrepancy, we conduct a correspondence analysis between valence-arousal and AUs, which points out that accurate valence-arousal prediction may require knowledge of only a few AUs.
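The closing observation, that accurate valence-arousal prediction may need only a few AUs, can be illustrated with a sparse linear probe: if only a few AU coefficients survive L1 regularisation, few AUs suffice. The toy sketch below uses synthetic data and differs methodologically from the paper's correspondence analysis:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
au_intensity = rng.uniform(0, 5, size=(500, 12))   # hypothetical AU intensities
valence = 0.6 * au_intensity[:, 4] - 0.4 * au_intensity[:, 1]  # toy ground truth
valence += rng.normal(scale=0.1, size=500)

model = Lasso(alpha=0.05).fit(au_intensity, valence)
informative = np.flatnonzero(model.coef_)  # indices of AUs the model retains
print(informative)  # with this toy setup, only a couple of AUs survive
```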